559 research outputs found

    Development of Some Spatial-domain Preprocessing and Post-processing Algorithms for Better 2-D Up-scaling

    Image super-resolution has attracted great interest in recent years and is used extensively in applications such as video streaming, multimedia, internet technologies, consumer electronics, and the display and printing industries. Image super-resolution is the process of increasing the resolution of a given image without losing its integrity; its most common application is to provide a better visual effect after resizing a digital image for display or printing. One method of improving image resolution is 2-D interpolation. An up-scaled image should retain all the image details with minimal blurring for good visual quality. The literature contains many efficient 2-D interpolation schemes that preserve image details well in the up-scaled images, particularly in regions with edges and fine details. Nevertheless, even these schemes introduce blurring in the up-scaled images because of high frequency (HF) degradation during the up-sampling process. Hence, there is scope to improve their performance further through spatial-domain pre-processing, post-processing and composite algorithms: simple but efficient schemes that effectively restore the HF content in up-scaled images for online and off-line applications. The efficient and widely used Lanczos-3 interpolation is taken as the baseline for performance improvement through the proposed algorithms. The pre-processing algorithms developed in this thesis are summarized first; the term pre-processing refers to processing the low-resolution input image prior to image up-scaling.
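The Lanczos-3 interpolation used as the baseline above can be sketched directly. The following pure-Python illustration shows the Lanczos-3 kernel and a 1-D up-sampling pass (2-D up-scaling applies the same filter separably along rows and then columns); the border handling and normalization here are our own illustrative choices, not the thesis's implementation:

```python
import math

def lanczos3(x: float) -> float:
    """Lanczos-3 kernel: sinc(x) * sinc(x/3) for |x| < 3, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= 3.0:
        return 0.0
    px = math.pi * x
    return 3.0 * math.sin(px) * math.sin(px / 3.0) / (px * px)

def upsample_1d(samples, factor):
    """Up-sample a 1-D signal by `factor` with Lanczos-3 interpolation.

    Borders are handled by clamping (edge replication), and partial
    kernel sums are normalized so flat regions stay flat.
    """
    n = len(samples)
    out = []
    for j in range(n * factor):
        x = j / factor                      # position in input coordinates
        lo = math.floor(x) - 2
        acc = wsum = 0.0
        for i in range(lo, lo + 6):         # six taps of the a=3 kernel
            w = lanczos3(x - i)
            acc += w * samples[min(max(i, 0), n - 1)]
            wsum += w
        out.append(acc / wsum)
    return out
```

Because the kernel is zero at all non-zero integer offsets, the original samples are reproduced exactly at integer positions; the blurring discussed above arises in the interpolated in-between samples, where the HF content is attenuated.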
The pre-processing algorithms proposed in this thesis are: the Laplacian of Laplacian based global pre-processing (LLGP) scheme; hybrid global pre-processing (HGP); iterative Laplacian of Laplacian based global pre-processing (ILLGP); unsharp masking based pre-processing (UMP); iterative unsharp masking (IUM); and an error based up-sampling (EU) scheme. LLGP, HGP and ILLGP are spatial-domain pre-processing algorithms based on 4th-, 6th- and 8th-order derivatives, respectively, to alleviate non-uniform blurring in up-scaled images. They obtain the high frequency (HF) extracts of an image using higher-order derivatives and perform precise sharpening on the low-resolution image to alleviate the blurring in its 2-D up-sampled counterpart. In the unsharp masking based pre-processing (UMP) scheme, a blurred version of the low-resolution image is subtracted from the original to extract its HF content; a weighted version of this HF extract is then superimposed on the original image to produce a sharpened image prior to up-scaling, countering blurring effectively. IUM uses many iterations to generate an unsharp mask containing very high frequency (VHF) components; the VHF extract results from decomposing the signal into sub-bands using an analysis filter bank. Since the VHF components suffer the greatest degradation, restoring them yields much better restoration performance. EU is another pre-processing scheme in which the HF degradation caused by image up-scaling is extracted as a prediction error containing the lost high-frequency components; superimposing this error on the low-resolution image prior to up-sampling considerably reduces blurring in the up-scaled image. The post-processing algorithms developed in this thesis are summarized next.
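The UMP idea (HF = image minus its blurred version, then weighted superposition before up-scaling) can be sketched in a few lines. This is an illustrative pure-Python version using a simple box blur; the blur kernel and the weight value are our own illustrative choices, not the thesis's tuned parameters:

```python
def box_blur(img, k=1):
    """Box blur over a (2k+1) x (2k+1) window with replicated borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = n = 0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
                    n += 1
            out[y][x] = acc / n
    return out

def unsharp_pre(img, weight=0.7):
    """UMP-style sharpening applied to the low-resolution image
    *before* up-scaling: HF = img - blur(img); out = img + weight * HF."""
    blurred = box_blur(img)
    h, w = len(img), len(img[0])
    return [[img[y][x] + weight * (img[y][x] - blurred[y][x])
             for x in range(w)] for y in range(h)]
```

Flat regions are unchanged (their HF extract is zero), while edges receive over- and undershoot that pre-compensates for the HF loss of the subsequent up-sampling.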
The term post-processing refers to processing the high-resolution up-scaled image. The post-processing algorithms proposed in this thesis are: the local adaptive Laplacian (LAL); the fuzzy weighted Laplacian (FWL); and a Legendre functional link artificial neural network (LFLANN). LAL is a non-fuzzy, locally adaptive scheme: regions of the up-scaled image with high variance are sharpened more than regions with moderate or low variance by employing a local adaptive Laplacian kernel. The weights of the LAL kernel vary with the normalized local variance, so that high-variance regions receive a greater degree of HF enhancement than low-variance regions, effectively countering the non-uniform blurring. The FWL post-processing scheme, with a higher degree of non-linearity, is proposed to further improve on LAL: being a fuzzy mapping scheme, FWL is highly nonlinear and resolves the blurring problem more effectively than the linear mapping employed by LAL. An LFLANN-based post-processing scheme is also proposed to minimize a cost function and thereby reduce the blurring in a 2-D up-scaled image. Legendre polynomials are used for functional expansion of the input pattern vector and provide a high degree of nonlinearity, so multiple layers can be replaced by a single-layer LFLANN architecture that reduces the cost function effectively for better restoration performance. The single-layer architecture has reduced computational complexity and is hence suitable for various real-time applications. The stand-alone pre-processing and post-processing schemes can be improved further by combining them into composite schemes. Two spatial-domain composite schemes, CS-I and CS-II, are proposed to tackle non-uniform blurring in an up-scaled image. CS-I combines the global iterative Laplacian (GIL) pre-processing scheme with the LAL post-processing scheme.
Another, highly nonlinear composite scheme, CS-II, combines the ILLGP scheme with the fuzzy weighted Laplacian post-processing scheme for better performance than the stand-alone schemes. Finally, it is observed that the proposed ILLGP, IUM, FWL, LFLANN and CS-II algorithms are the best in their respective categories at effectively reducing blurring in up-scaled images.
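The LAL idea (a Laplacian boost scaled by normalized local variance) can be sketched as follows. This is an illustrative version, not the thesis's kernel: the window size, the 4-neighbour Laplacian and the `gain` value are our own illustrative choices:

```python
def local_variance(img, y, x, k=1):
    """Variance over a (2k+1) x (2k+1) window with replicated borders."""
    h, w = len(img), len(img[0])
    vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
            for dy in range(-k, k + 1) for dx in range(-k, k + 1)]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def lal_postprocess(img, gain=0.5):
    """LAL-style post-processing sketch: each interior pixel gets a
    Laplacian HF boost scaled by its normalized local variance, so
    detail (high-variance) regions are sharpened more than smooth ones."""
    h, w = len(img), len(img[0])
    var = [[local_variance(img, y, x) for x in range(w)] for y in range(h)]
    vmax = max(max(row) for row in var) or 1.0   # avoid divide-by-zero
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            out[y][x] = img[y][x] + gain * (var[y][x] / vmax) * lap
    return out
```

On a flat image the variance map is zero everywhere and the output equals the input, which is the behaviour that distinguishes this adaptive scheme from a fixed global Laplacian sharpener.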

    Tracking Evolving Labels Using Cone-Based Oracles

    The evolving data framework was first proposed by Anagnostopoulos et al., in which an evolver makes small changes to a structure behind the scenes. Instead of taking a single input and producing a single output, an algorithm judiciously probes the current state of the structure and attempts to continuously maintain a sketch of the structure that is as close as possible to its actual state. A number of problems have been studied in the evolving framework, including our own work on labeled trees. We were motivated by the problem of maintaining a labeling in the plane, where updating the labels requires physically moving them. Applications include tracking evolving disease hot-spots via mobile testing units, and tracking unmanned aerial vehicles. Specifically, we consider the problem of tracking labeled nodes in the plane, where an evolver continuously swaps the labels of nearby nodes in the background, unknown to us. We are tasked with maintaining a hypothesis, an approximate sketch of the locations of these labels, which we can only update by physically moving them over a sparse graph. We assume the existence of an oracle which, when suitably probed, guides us in fixing our hypothesis. Comment: This is an abstract of a presentation given at CG:YRF 2023. It has been made public for the benefit of the community and should be considered a preprint rather than a formally reviewed paper. Thus, this work is expected to appear in a conference with formal proceedings and/or in a journal.

    Acute necrotizing pancreatitis in a pregnant female diagnosed after caesarian delivery: a case report

    Acute pancreatitis (AP) is a rare event in pregnancy, occurring in approximately 3 in 10,000 pregnancies. The spectrum of AP in pregnancy ranges from mild pancreatitis to serious pancreatitis associated with necrosis, abscesses, pseudocysts and multiple organ dysfunction syndromes. A 21-year-old primigravida presented to the labour room at 33 weeks and 2 days of gestation complaining of abdominal pain. Per-vulval examination showed a pin-point vagina (the patient had a history of transverse vaginal septum and had been operated on for it before conception). The patient underwent caesarian delivery, and Fenton's repair was done. Contrast-enhanced computed tomography showed signs of acute necrotizing pancreatitis with peripancreatic collection. AP in pregnancy remains a challenging clinical problem to manage; its general management is supportive.

    2048 Without New Tiles Is Still Hard

    We study the computational complexity of a variant of the popular 2048 game in which no new tiles are generated after each move. As usual, instances are defined on rectangular boards of arbitrary size. We consider the natural decision problems of achieving a given constant tile value, score or number of moves, and we also consider approximating the maximum achievable value for these three objectives. We prove that all these problems are NP-hard by a reduction from 3SAT. Furthermore, we consider potential extensions of these results to a similar variant of the Threes! game. To this end, we report on a peculiar motion pattern, not possible in 2048, which we found much harder to control with similar board designs.
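The move rule of the no-new-tiles variant can be stated precisely in code. The following illustrative sketch (our own, not from the paper) implements a single left move on one row: tiles slide over empty cells, and each pair of equal neighbours merges at most once per move:

```python
def slide_left(row):
    """One left move on a 2048 row in the no-new-tiles variant.

    `row` is a list of tile values with 0 for empty cells. Tiles slide
    over empties, and each adjacent equal pair merges exactly once.
    """
    tiles = [v for v in row if v != 0]          # compact the non-empty tiles
    out, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] * 2)            # merge the pair
            i += 2
        else:
            out.append(tiles[i])
            i += 1
    return out + [0] * (len(row) - len(out))    # pad with empty cells
```

A full move applies this to every row (or column, after rotation); since no tile is spawned afterwards, the board's total tile count can only decrease, which is what makes the reachability questions studied above well-defined finite problems.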

    Approximate optimal control model for visual search tasks

    Visual search is a cognitive process that makes use of eye movements to bring the relatively high-acuity fovea to bear on areas of interest, to aid in navigation or interaction within the environment. This thesis explores a novel hypothesis that human visual search behaviour emerges as an adaptation to the underlying human information processing constraints, task utility and ecology. A new computational model, the Computationally Rational Visual Search (CRVS) model, is also presented that provides a mathematical formulation for this hypothesis. Through the model, we ask what mechanism and strategy a rational agent would use to move its gaze, and when it should stop searching. The CRVS model formulates the hypothesis as a Partially Observable Markov Decision Process (POMDP). The POMDP provides a mathematical framework to model visual search as an optimal adaptation to both top-down and bottom-up mechanisms. Specifically, the agent is only able to partially observe the environment because of the bounds imposed by the human visual system, and it learns to make decisions based on the partial information it obtains and a feedback signal. The POMDP formulation is very general and can be applied to a range of problems; however, finding an optimal solution to a POMDP is computationally expensive. In this thesis, we use machine learning, specifically a deep reinforcement learning algorithm (Asynchronous Advantage Actor-Critic), to find an approximately optimal solution to the POMDP. The thesis answers the where-to-fixate-next and when-to-stop-searching questions using three different visual search tasks. In Chapter 4 we investigate computationally rational strategies for when to stop search, using a real-world task of searching for images on a web page. In Chapter 5, we investigate computationally rational strategies for where to look next when guided by low-level feature cues such as colour, shape and size.
Finally, in Chapter 6, we combine the approximately optimal strategies learned in the previous chapters for a conjunctive visual search task (the Distractor-Ratio task), where the model must answer both the when-to-stop and where-to-search questions. The results show that visual search strategies can be explained as an approximately optimal adaptation to information processing constraints and to the utility and ecology of the task.
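The POMDP loop described above, in which an agent with a belief over target locations chooses where to fixate or whether to stop, can be sketched minimally. This is an illustrative toy (our own detection probability, step cost and belief update, not the CRVS model or its A3C solver):

```python
import random

def run_search_episode(target, n_locations, policy,
                       step_cost=0.05, max_steps=20):
    """Minimal POMDP-style search loop: the agent holds a belief over
    target locations, fixates a location to get a noisy partial
    observation, Bayes-updates the belief, and may STOP at any step
    to report the most likely location. Returns total reward."""
    belief = [1.0 / n_locations] * n_locations      # uniform prior
    reward = 0.0
    for _ in range(max_steps):
        action = policy(belief)                     # location index or "stop"
        if action == "stop":
            guess = belief.index(max(belief))
            return reward + (1.0 if guess == target else 0.0)
        reward -= step_cost                         # time cost of a fixation
        # Noisy foveal observation: detect the target at the fixated
        # location with probability 0.9 (an assumed value).
        seen = (action == target) and random.random() < 0.9
        if seen:
            likelihood = [0.9 if i == action else 0.0
                          for i in range(n_locations)]
        else:
            likelihood = [0.1 if i == action else 1.0
                          for i in range(n_locations)]
        belief = [b * l for b, l in zip(belief, likelihood)]
        z = sum(belief) or 1.0
        belief = [b / z for b in belief]            # normalize the posterior
    return reward
```

The policy maps beliefs to actions; learning such a mapping that trades the per-fixation cost against the reward for a correct report is, in miniature, the optimization problem the thesis solves with deep reinforcement learning.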

    Assessment of Spontaneous Heating of Coals by Thermal Analysis Technique

    Coal mine fires are a major issue not only in India but all over the world, and around 80% of the coal fires occurring in Indian coal mines are due to spontaneous heating of coal. Coal fires cost precious lives and cause economic losses to the organization and to the nation. With the rapid growth of India, energy requirements for industry and domestic use are increasing rapidly, and thermal power plants are relied upon to meet the increased demand, so more stress is now placed on high coal production. With increased coal production, spontaneous heating incidents are also bound to increase; special attention therefore needs to be given to studying the spontaneous heating susceptibility of coal, so that early precautionary measures can be taken to prevent it from catching fire. In this study, 26 coal samples were collected from different coal fields of India, viz. MCL, SECL, SCCL, ECL, CCL, BCCL, NECL and NCL. All the coal samples were studied using differential thermal analysis (DTA), and proximate analysis was also done to obtain the intrinsic properties of the coal. Using the thermograms obtained from the DTA study, the coal samples were classified into three categories based on their potential for spontaneous heating. A correlation study was carried out between the intrinsic properties of coal and the key indicators of spontaneous heating from the thermogram, viz. onset temperature, slope IIA, slope IIB and overall slope II. From the thermograms, 14 samples were found to be highly prone to spontaneous heating, while the remaining samples were moderately and poorly susceptible (6 each). The slope value of stage IIA is a better indicator for determining spontaneous heating, and it was found to be higher for highly susceptible coals. The correlation study confirmed that the moisture content of coal is a key factor affecting its spontaneous heating.

    Perceptions of climate change: Applying assessments to policy and practice

    The National Climate Change Impact Survey 2016 (NCCIS) asked households across Nepal to recall their individual experiences of short-term weather variations, impacts and their adaptation responses. This paper uses examples from Nepal and elsewhere to explore how survey data based on people’s perceptions can inform understanding and policy-making on climate change impacts, vulnerability and adaptation. We are able to distinguish four areas of policy and practice where such data has made particularly valuable contributions